605,154 research outputs found
Optimization in task-completion networks
We discuss the collective behavior of a network of individuals that receive,
process, and forward tasks to each other. Owing to processing costs, they store
incoming tasks in buffers, optimally choosing the frequency at which to check
and process the buffer. The individual optimizing strategy of each node
determines the aggregate behavior of the network. We find that, under general
assumptions, the whole system exhibits coexistence of equilibria and hysteresis.
Comment: 18 pages, 3 figures, submitted to JSTA
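As a toy illustration of the trade-off each node faces (this is not the paper's model; the cost names and functional form below are assumptions), a node that pays a fixed price per buffer check and a holding cost proportional to how long tasks wait minimizes its total cost at a frequency that balances the two:

```python
import math

def total_cost(f, check_cost=2.0, wait_cost=8.0):
    """Cost per unit time of checking the buffer at frequency f:
    a fixed price per check plus a holding cost that grows with
    waiting time (mean wait ~ 1/f for periodic checks)."""
    return check_cost * f + wait_cost / f

def optimal_frequency(check_cost=2.0, wait_cost=8.0):
    """Setting d(cost)/df = 0 gives f* = sqrt(wait_cost / check_cost)."""
    return math.sqrt(wait_cost / check_cost)

f_star = optimal_frequency()  # sqrt(8/2) = 2.0 checks per unit time
```

Checking too often wastes effort on empty buffers; checking too rarely lets tasks pile up, which is the kind of individual trade-off whose aggregate can produce the multiple equilibria the abstract describes.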
Recent developments in multilevel optimization
Recent developments in multilevel optimization are briefly reviewed. Topics discussed include the general nature of the multilevel design task, the use of approximations to develop and solve the analysis design task, the structure of the formal multidiscipline optimization problem, a simple cantilevered beam that demonstrates the concepts of multilevel design, and the basic mathematical details of the optimization task at the system level.
MOON: A Mixed Objective Optimization Network for the Recognition of Facial Attributes
Attribute recognition, particularly facial, extracts many labels for each
image. While some multi-task vision problems can be decomposed into separate
tasks and stages, e.g., training independent models for each task, for a
growing set of problems joint optimization across all tasks has been shown to
improve performance. We show that for deep convolutional neural network (DCNN)
facial attribute extraction, multi-task optimization outperforms training
separate models per attribute. Unfortunately,
it can be difficult to apply joint optimization to DCNNs when training data is
imbalanced, and re-balancing multi-label data directly is structurally
infeasible, since adding/removing data to balance one label will change the
sampling of the other labels. This paper addresses the multi-label imbalance
problem by introducing a novel mixed objective optimization network (MOON) with
a loss function that mixes multiple task objectives with domain adaptive
re-weighting of propagated loss. Experiments demonstrate that not only does
MOON advance the state of the art in facial attribute recognition, but it also
outperforms independently trained DCNNs using the same data. When using facial
attributes for the LFW face recognition task, we show that our balanced (domain
adapted) network outperforms the unbalanced trained network.
Comment: Post-print of manuscript accepted to the European Conference on
Computer Vision (ECCV) 2016
http://link.springer.com/chapter/10.1007%2F978-3-319-46454-1_
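A minimal sketch of the idea behind domain-adaptive loss re-weighting (the weighting scheme below is an illustration, not MOON's exact formulation): each binary attribute keeps its own loss term, and positive/negative examples are re-weighted so the imbalanced source distribution mimics a balanced target distribution, without adding or removing any data.

```python
import numpy as np

def reweighted_multilabel_bce(probs, labels, pos_rates, target_rate=0.5):
    """Per-attribute binary cross-entropy with domain-adaptive weights.

    probs, labels: (n_samples, n_attributes) predicted probabilities / 0-1 labels
    pos_rates:     empirical positive rate of each attribute in the source data
    target_rate:   desired (balanced) positive rate in the target domain

    Positives of a rare attribute get weight target/source, negatives get
    (1-target)/(1-source), so re-weighting one label never perturbs the
    sampling of the others (unlike resampling the multi-label data).
    """
    eps = 1e-12
    pos_rates = np.asarray(pos_rates, dtype=float)
    w_pos = target_rate / np.maximum(pos_rates, eps)
    w_neg = (1.0 - target_rate) / np.maximum(1.0 - pos_rates, eps)
    w = np.where(labels == 1, w_pos, w_neg)  # broadcasts over samples
    bce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return float(np.mean(w * bce))
```

When the source is already balanced (pos_rates == target_rate), every weight is 1 and this reduces to plain multi-label cross-entropy.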
Dynamics simulation of human box delivering task
Thesis (M.S.), University of Alaska Fairbanks, 2018.

The dynamic optimization of a box delivery motion is a complex task. The key component is to achieve an optimized motion associated with the box weight, delivering speed, and location. This thesis addresses one solution for determining the optimal delivery of a box. The delivery task is divided into five subtasks: lifting, transition step, carrying, transition step, and unloading. Each subtask is simulated independently with appropriate boundary conditions so that they can be stitched together to render a complete delivery task. Each subtask is formulated as an optimization problem whose design variables are joint angle profiles. For the lifting and carrying tasks, the objective function is the dynamic effort. The unloading task is a byproduct of the lifting task, but done in reverse, starting with holding the box and ending with it at its final position. In contrast, for the transition tasks, the objective function is a combination of dynamic effort and joint discomfort. Several joint parameters are analyzed, including joint torques, joint angles, and ground reaction forces. A viable optimized motion is generated from the simulation results and empirically validated. This research holds significance for professions that involve heavy box lifting and delivery and seek to reduce the chance of injury.

Contents: Chapter 1 Introduction -- Chapter 2 Skeletal Human Modeling -- Chapter 3 Kinematics and Dynamics -- Chapter 4 Lifting Simulation -- Chapter 5 Carrying Simulation -- Chapter 6 Delivering Simulation -- Chapter 7 Conclusion and Future Research -- References
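As a toy version of the "design variables are joint angle profiles, objective is dynamic effort" formulation (a single joint, a cubic boundary-conditioned profile, and effort as the integral of squared torque are all simplifying assumptions here, not the thesis's full skeletal model), one can compare candidate lifting profiles by their effort:

```python
import numpy as np

def cubic_profile(t, T, th0, thf):
    """Joint angle profile with zero start/end velocity (cubic interpolation),
    a typical boundary condition for stitching subtasks together."""
    s = t / T
    return th0 + (thf - th0) * (3 * s**2 - 2 * s**3)

def dynamic_effort(theta, dt, inertia=1.0):
    """Integral of squared joint torque, with tau = I * theta_ddot
    (gravity and multi-link coupling omitted in this toy model)."""
    acc = np.gradient(np.gradient(theta, dt), dt)
    return float(np.sum((inertia * acc) ** 2) * dt)

T, n = 1.0, 201
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
smooth = cubic_profile(t, T, 0.0, np.pi / 2)   # candidate lift: shoulder 0 -> 90 deg
jerky = np.pi / 2 * (t / T) ** 6               # same endpoints, abrupt finish
```

An optimizer over joint angle profiles would prefer the smooth candidate, since its effort integral is several times smaller for the same start and end posture.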
Pareto-Path Multi-Task Multiple Kernel Learning
A traditional and intuitively appealing Multi-Task Multiple Kernel Learning
(MT-MKL) method is to optimize the sum (thus, the average) of objective
functions with (partially) shared kernel function, which allows information
sharing amongst tasks. We point out that the obtained solution corresponds to a
single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO)
problem, which considers the concurrent optimization of all task objectives
involved in the Multi-Task Learning (MTL) problem. Motivated by this last
observation and arguing that the former approach is heuristic, we propose a
novel Support Vector Machine (SVM) MT-MKL framework that considers an
implicitly-defined set of conic combinations of task objectives. We show that
solving our framework produces solutions along a path on the aforementioned PF
and that it subsumes the optimization of the average of objective functions as
a special case. Using algorithms we derived, we demonstrate through a series of
experimental results that the framework is capable of achieving better
classification performance when compared to other similar MTL approaches.
Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems
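To see why a family of combinations of task objectives traces a path of Pareto-optimal trade-offs while the plain average picks out a single point, consider two toy convex task objectives sharing one parameter (this scalarization example is an illustration, not the paper's SVM formulation):

```python
import numpy as np

# Two toy task objectives sharing one model parameter w.
def f1(w):
    return (w - 1.0) ** 2

def f2(w):
    return (w + 1.0) ** 2

def minimize_combination(lam):
    """Minimize lam*f1 + (1-lam)*f2 over a grid of w; each lam in (0,1)
    yields one Pareto-optimal trade-off point (f1(w*), f2(w*))."""
    grid = np.linspace(-2.0, 2.0, 4001)
    vals = lam * f1(grid) + (1.0 - lam) * f2(grid)
    w_star = grid[np.argmin(vals)]
    return f1(w_star), f2(w_star)

average_point = minimize_combination(0.5)  # the usual "sum of objectives"
pareto_path = [minimize_combination(lam) for lam in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Along the path, improving one task's objective necessarily worsens the other's, and the averaged solution is just the lam = 0.5 point on that curve, which is the sense in which optimizing the sum is one special case of a whole family of trade-offs.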